Object goal navigation (ObjectNav) in unseen environments is a fundamental task for Embodied AI. Agents in existing works learn ObjectNav policies based on 2D maps, scene graphs, or image sequences. Since this task takes place in 3D space, a 3D-aware agent can advance its ObjectNav capability by learning from fine-grained spatial information. However, leveraging 3D scene representations can be prohibitively impractical for policy learning in this floor-level task, due to low sample efficiency and high computational cost. In this work, we propose a framework for the challenging 3D-aware ObjectNav based on two straightforward sub-policies. The two sub-policies, namely a corner-guided exploration policy and a category-aware identification policy, operate simultaneously using online-fused 3D points as observations. Through extensive experiments, we show that this framework can dramatically improve ObjectNav performance by learning from 3D scene representations. Our framework achieves the best performance among all modular-based methods on the Matterport3D and Gibson datasets, while requiring up to 30x less computational cost for training.
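The abstract above describes two sub-policies running side by side over an online-fused point cloud; a minimal hypothetical sketch of that control flow follows. All names (`fuse_depth`, `corner_guided_exploration`, `category_aware_identification`) and the random stand-ins for the learned components are illustrative assumptions, not the authors' API.

```python
import numpy as np

def fuse_depth(cloud, new_points):
    """Accumulate newly observed 3D points into the global cloud (naive fusion)."""
    return np.concatenate([cloud, new_points], axis=0)

def corner_guided_exploration(cloud_xy):
    """Propose a goal near one corner of the observed floor region."""
    mins, maxs = cloud_xy.min(axis=0), cloud_xy.max(axis=0)
    corners = np.array([[mins[0], mins[1]], [mins[0], maxs[1]],
                        [maxs[0], mins[1]], [maxs[0], maxs[1]]])
    return corners[np.random.randint(4)]

def category_aware_identification(cloud, labels, target):
    """Return the xy centroid of points predicted as the target category, if any."""
    hits = cloud[labels == target]
    return hits[:, :2].mean(axis=0) if len(hits) else None

cloud = np.zeros((1, 3))
for step in range(10):                                 # simplified episode loop
    cloud = fuse_depth(cloud, np.random.randn(64, 3))  # stand-in depth back-projection
    labels = np.random.randint(0, 5, size=len(cloud))  # stand-in semantic predictions
    goal = category_aware_identification(cloud, labels, target=3)
    if goal is None:                                   # target unseen: keep exploring
        goal = corner_guided_exploration(cloud[:, :2])
    # a low-level planner would now drive the agent toward `goal`
```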
It is essential for future home-assistant robots to understand and manipulate diverse 3D objects in daily human environments. Towards building scalable systems that can perform diverse manipulation tasks over various 3D shapes, recent works have advocated and demonstrated promising results in learning visual actionable affordance, which labels every point on the input 3D geometry with the likelihood of completing a downstream task (e.g., pushing or picking up). However, these works only study single-gripper manipulation tasks, while many real-world tasks require two hands to collaborate. In this work, we propose a novel learning framework, DualAfford, to learn collaborative affordance for dual-gripper manipulation tasks. The core design of the method is to reduce the quadratic problem of two grippers into two disentangled yet interconnected sub-tasks for efficient learning. Using the large-scale PartNet-Mobility and ShapeNet datasets, we set up four benchmark tasks for dual-gripper manipulation. Experiments demonstrate the effectiveness and superiority of our method over three baselines. Additional results and videos can be found at https://hyperplane-lab.github.io/dualafford.
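A minimal sketch of the disentangled-yet-interconnected decomposition described above: a second affordance network is conditioned on the first gripper's proposal, so the joint two-gripper search is never enumerated quadratically. Module names, shapes, and the conditioning scheme are assumptions for illustration, not the released DualAfford code.

```python
import torch
import torch.nn as nn

class PointAffordance(nn.Module):
    """Per-point success likelihood, optionally conditioned on a context vector."""
    def __init__(self, feat_dim, cond_dim=0):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim + cond_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 1), nn.Sigmoid())

    def forward(self, point_feats, cond=None):
        if cond is not None:                       # broadcast first-gripper context
            cond = cond.unsqueeze(1).expand(-1, point_feats.size(1), -1)
            point_feats = torch.cat([point_feats, cond], dim=-1)
        return self.mlp(point_feats).squeeze(-1)

B, N, F = 2, 1024, 64
feats = torch.randn(B, N, F)                       # stand-in per-point features
first = PointAffordance(F)
second = PointAffordance(F, cond_dim=F)

a1 = first(feats)                                  # gripper-1 affordance map
idx = a1.argmax(dim=1)                             # pick gripper-1 contact point
ctx = feats[torch.arange(B), idx]                  # its feature acts as the condition
a2 = second(feats, cond=ctx)                       # gripper-2 map, conditioned on it
```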
Part assembly is a typical but challenging task in robotics, where a robot assembles a set of individual parts into a complete shape. In this paper, we develop a robotic assembly simulation environment for furniture assembly. We formulate the part assembly task as a concrete reinforcement learning problem and propose a pipeline for the robot to learn to assemble a diverse set of chairs. Experiments show that when testing with unseen chairs, our method achieves a success rate of 74.5% under the object-centric setting and 50.0% under the full setting. We adopt an RRT-Connect algorithm as the baseline, which only achieves a success rate of 18.8% after a significantly longer computation time. Supplemental materials and videos are available on our project webpage.
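To make the reinforcement-learning formulation concrete, here is a toy, hypothetical gym-style environment: state is the set of part poses, the action nudges one part, and the reward is the negative pose error. The observation/action layout and reward are illustrative assumptions, not the paper's actual environment.

```python
import numpy as np

class ChairAssemblyEnv:
    """Toy stand-in: state = part poses, action = (part index, delta pose)."""
    def __init__(self, num_parts=4):
        self.num_parts = num_parts
        self.target = np.zeros((num_parts, 6))           # goal poses (xyz + rpy)

    def reset(self):
        self.poses = np.random.uniform(-1, 1, (self.num_parts, 6))
        return self.poses.copy()

    def step(self, action):
        part, delta = int(action[0]), action[1:7]
        self.poses[part] += delta
        err = np.linalg.norm(self.poses - self.target)
        reward = -err                                    # dense distance reward
        done = err < 0.05                                # all parts near goal poses
        return self.poses.copy(), reward, done, {}

env = ChairAssemblyEnv()
obs = env.reset()
for _ in range(100):                                     # random-policy placeholder
    action = np.concatenate([[np.random.randint(env.num_parts)],
                             np.random.uniform(-0.1, 0.1, 6)])
    obs, reward, done, _ = env.step(action)
    if done:
        break
```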
Perceiving and interacting with 3D articulated objects, such as cabinets, doors, and faucets, poses particular challenges for future home-assistant robots performing daily tasks in human environments. Besides parsing the articulated parts and joint parameters, researchers have recently advocated learning manipulation affordance over the input shape geometry, which is more task-aware and geometrically fine-grained. However, taking only passive observations as input, these methods ignore many hidden but important kinematic constraints (e.g., joint location and limits) and dynamic factors (e.g., joint friction and restitution), and therefore lose significant accuracy on test cases with such uncertainties. In this paper, we propose a novel framework, named AdaAfford, that learns to perform very few test-time interactions to quickly adapt the affordance priors to more accurate instance-specific posteriors. We conduct large-scale experiments using the PartNet-Mobility dataset and demonstrate that our system performs better than baselines.
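A hedged sketch of the prior-to-posterior adaptation idea above: an adaptation module consumes a few (probed point, observed outcome) pairs and emits a latent that conditions the affordance network. All module names, shapes, and the aggregation scheme are assumptions, not the AdaAfford implementation.

```python
import torch
import torch.nn as nn

class AffordanceNet(nn.Module):
    def __init__(self, feat_dim=64, z_dim=16):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(feat_dim + z_dim, 128), nn.ReLU(),
                                  nn.Linear(128, 1), nn.Sigmoid())

    def forward(self, feats, z):
        z = z.unsqueeze(1).expand(-1, feats.size(1), -1)
        return self.head(torch.cat([feats, z], dim=-1)).squeeze(-1)

class AdaptationModule(nn.Module):
    """Encode (probed point feature, observed outcome) pairs into a latent."""
    def __init__(self, feat_dim=64, z_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(feat_dim + 1, 64), nn.ReLU(),
                                 nn.Linear(64, z_dim))

    def forward(self, probe_feats, outcomes):
        x = torch.cat([probe_feats, outcomes.unsqueeze(-1)], dim=-1)
        return self.enc(x).mean(dim=1)            # aggregate few-shot evidence

feats = torch.randn(1, 1024, 64)                  # stand-in per-point features
net, adapter = AffordanceNet(), AdaptationModule()
z = torch.zeros(1, 16)                            # prior: no interactions yet
probes, results = [], []
for _ in range(3):                                # a few test-time interactions
    pick = net(feats, z).argmax(dim=1)            # probe the most promising point
    probes.append(feats[torch.arange(1), pick])
    results.append(torch.rand(1))                 # stand-in simulated outcome
    z = adapter(torch.stack(probes, dim=1), torch.stack(results, dim=1))
posterior_affordance = net(feats, z)              # instance-specific posterior
```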
In this work, we tackle the problem of active camera localization, which actively controls camera movement to achieve a precise camera pose. Past solutions are mostly based on Markov Localization, which reduces the position-wise camera uncertainty for localization. These methods localize the camera in a discrete pose space and are agnostic to the localization-driven scene properties, which restricts camera pose accuracy. We propose to overcome these limitations via a novel active camera localization algorithm composed of a passive and an active localization module. The former optimizes the camera pose in a continuous pose space by establishing point-wise camera-world correspondences. The latter explicitly models the scene and camera uncertainty components to plan the right path for accurate camera pose estimation. We validate our algorithm on challenging localization scenarios from both synthetic and scanned real-world indoor scenes. Experimental results demonstrate that our algorithm outperforms both the state-of-the-art Markov Localization based approach and other compared methods on fine camera pose accuracy. Code and data are released at https://github.com/qhfang/accurateacl.
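A minimal sketch of the two-module structure described above, assuming a standard PnP solver for the passive module and a greedy uncertainty-driven pick for the active module. The heuristics and random stand-ins are illustrative, not the paper's actual algorithm.

```python
import numpy as np
import cv2

def passive_localize(pts_3d, pts_2d, K):
    """Continuous-space pose from point-wise camera-world correspondences (PnP)."""
    ok, rvec, tvec = cv2.solvePnP(pts_3d, pts_2d, K, distCoeffs=None)
    return rvec, tvec                              # rotation (Rodrigues) + translation

def active_plan(candidate_views, uncertainty):
    """Pick the candidate viewpoint that most reduces the modeled uncertainty."""
    return candidate_views[int(np.argmax(uncertainty))]

K = np.array([[500., 0, 320], [0, 500., 240], [0, 0, 1]])
pts_3d = np.random.rand(12, 3).astype(np.float32)         # stand-in scene points
pts_2d = (np.random.rand(12, 2) * 480).astype(np.float32) # stand-in detections
rvec, tvec = passive_localize(pts_3d, pts_2d, K)

views = np.random.rand(5, 3)                       # candidate next camera positions
scores = np.random.rand(5)                         # stand-in uncertainty estimates
next_view = active_plan(views, scores)             # where to move the camera next
```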
Domain adaptation is an important challenge for neural machine translation. However, the traditional fine-tuning solution requires multiple rounds of extra training and incurs a high cost. In this paper, we propose a non-tuning paradigm that resolves domain adaptation with a prompt-based method. Specifically, we construct a bilingual phrase-level database and retrieve relevant pairs from it as prompts for input sentences. By utilizing Retrieved Phrase-level Prompts (REPP), we effectively boost translation quality. Experiments show that our method improves domain-specific machine translation by 6.2 BLEU scores and improves translation constraint accuracy by 11.5% without additional training.
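A minimal sketch of the retrieve-then-prompt idea above: match phrases from a bilingual database against the source sentence and prepend the hits as a prompt for the translation model. The toy database and the prompt format are illustrative assumptions, not the paper's actual design.

```python
# Hypothetical bilingual phrase-level database (source phrase -> target phrase).
phrase_db = {
    "neural machine translation": "神经机器翻译",
    "domain adaptation": "领域适应",
}

def retrieve_prompts(source, db):
    """Return phrase pairs whose source side occurs in the input sentence."""
    return [(s, t) for s, t in db.items() if s in source.lower()]

def build_prompted_input(source, db):
    """Prepend retrieved phrase pairs as hints; the result goes to the NMT model."""
    pairs = retrieve_prompts(source, db)
    prompt = " ; ".join(f"{s} => {t}" for s, t in pairs)
    return f"[hints] {prompt} [input] {source}"

print(build_prompted_input(
    "Domain adaptation is a key challenge for neural machine translation.",
    phrase_db))
```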
How can we effectively adapt neural machine translation (NMT) models according to emerging cases without retraining? Despite the great success of neural machine translation, updating deployed models online remains a challenge. Existing non-parametric approaches that retrieve similar examples from a database to guide the translation process are promising, but are prone to overfitting the retrieved examples. In this work, we propose Kernel-Smoothed Translation with Example Retrieval (KSTER), an effective approach to adapt neural machine translation models online. Experiments on domain adaptation and multi-domain machine translation datasets show that, even without expensive retraining, KSTER achieves improvements of 1.1 to 1.5 BLEU scores over the best existing online adaptation methods. The code and trained models are released at https://github.com/jiangqn/kster.
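A sketch of kernel-smoothed example retrieval in the spirit described above (kNN-MT-style): a retrieval distribution built from a kernel over neighbor distances is interpolated with the model distribution, so no single retrieved example can dominate. The kernel, bandwidth, and fixed mixing weight here are illustrative; the actual method learns such quantities adaptively.

```python
import torch

def knn_distribution(distances, neighbor_tokens, vocab_size, bandwidth=10.0):
    """Gaussian-kernel weights over retrieved neighbors, scattered into the vocab."""
    w = torch.softmax(-distances / bandwidth, dim=-1)          # kernel smoothing
    p = torch.zeros(vocab_size)
    p.scatter_add_(0, neighbor_tokens, w)                      # sum weights per token
    return p

vocab = 32                                                     # toy vocabulary
model_probs = torch.softmax(torch.randn(vocab), dim=-1)        # NMT distribution
dists = torch.tensor([1.2, 3.5, 7.0, 9.1])                     # neighbor distances
tokens = torch.tensor([5, 5, 17, 3])                           # neighbor target tokens

lam = 0.4                                                      # mixing weight (learned adaptively in the actual method)
p_knn = knn_distribution(dists, tokens, vocab)
final = lam * p_knn + (1 - lam) * model_probs                  # smoothed prediction
next_token = final.argmax()
```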
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
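An illustrative sketch of the implicit alignment idea: 3D coordinates are mapped through a shared MLP into positional encodings added to both modality token streams, and a plain transformer decoder with object queries predicts boxes. Shapes and modules are assumptions for illustration, not the released CMT code.

```python
import torch
import torch.nn as nn

d = 128
coord_pe = nn.Sequential(nn.Linear(3, d), nn.ReLU(), nn.Linear(d, d))

img_tokens = torch.randn(1, 200, d)       # stand-in image features
img_coords = torch.rand(1, 200, 3)        # 3D points sampled along camera rays
pts_tokens = torch.randn(1, 300, d)       # stand-in LiDAR/point features
pts_coords = torch.rand(1, 300, 3)        # their 3D positions

# Shared 3D positional encoding aligns the two modalities implicitly.
tokens = torch.cat([img_tokens + coord_pe(img_coords),
                    pts_tokens + coord_pe(pts_coords)], dim=1)

decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model=d, nhead=8, batch_first=True), num_layers=2)
queries = torch.randn(1, 50, d)           # learnable object queries in practice
out = decoder(queries, tokens)            # cross-attend over both modalities
boxes = nn.Linear(d, 7)(out)              # (x, y, z, w, l, h, yaw) per query
```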
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
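A hedged sketch of a style-aware adaptive feed-forward layer in the spirit of the mechanism described above: a style code predicts per-channel scales that modulate the FFN's hidden units. The exact parameterization in the paper may differ; all names and shapes are assumptions.

```python
import torch
import torch.nn as nn

class StyleAdaptiveFFN(nn.Module):
    def __init__(self, d_model=256, d_ff=1024, d_style=64):
        super().__init__()
        self.fc1, self.fc2 = nn.Linear(d_model, d_ff), nn.Linear(d_ff, d_model)
        self.to_scale = nn.Linear(d_style, d_ff)   # style code -> channel scales

    def forward(self, x, style_code):
        scale = torch.sigmoid(self.to_scale(style_code)).unsqueeze(1)
        h = torch.relu(self.fc1(x)) * scale        # style modulates hidden units
        return self.fc2(h)

ffn = StyleAdaptiveFFN()
content = torch.randn(2, 40, 256)                  # audio-derived content tokens
style = torch.randn(2, 64)                         # code from the style encoder
animated = ffn(content, style)                     # stylized motion features
```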
The visual dimension of cities has been a fundamental subject in urban studies, since the pioneering work of scholars such as Sitte, Lynch, Arnheim, and Jacobs. Several decades later, big data and artificial intelligence (AI) are revolutionizing how people move, sense, and interact with cities. This paper reviews the literature on the appearance and function of cities to illustrate how visual information has been used to understand them. A conceptual framework, Urban Visual Intelligence, is introduced to systematically elaborate on how new image data sources and AI techniques are reshaping the way researchers perceive and measure cities, enabling the study of the physical environment and its interactions with socioeconomic environments at various scales. The paper argues that these new approaches enable researchers to revisit the classic urban theories and themes, and potentially help cities create environments that are more in line with human behaviors and aspirations in the digital age.